Knights Landing
Why Intel Is Tweaking Xeon Phi For Deep Learning
If there is anything that chip giant Intel has learned over the past two decades as it has gradually climbed to dominance in processing in the datacenter, it is ironically that one size most definitely does not fit all. As the tight co-design of hardware and software continues in all parts of the IT industry, we can expect fine-grained customization for very precise – and lucrative – workloads, like data analytics and machine learning, just to name two of the hottest areas today. Software will run most efficiently on hardware that is tuned for it, although we are used to thinking of that process in a mirror image, where programmers tweak their code to take advantage of the forward-looking features a chip maker conceives of four or five years before they are etched into its transistors and delivered as a product. The competition is fierce these days, and Intel has to move fast if it is to keep its compute hegemony in the datacenter. That is why at the Intel Developer Forum in San Francisco the company laid out a new path for the Knights family of many-core processors that will see the company deliver a version of this chip specifically tuned for machine learning workloads.
Knights Landing Will Waterfall Down From On High
With the general availability of the "Knights Landing" Xeon Phi many-core processors from Intel last month, some of the largest supercomputing labs on the planet are getting their first taste of what the future style of high performance computing could look like for the rest of us. We are not suggesting that the Xeon Phi processor will be the only compute engine that will be deployed to run traditional simulation and modeling applications as well as data analytics, graph processing, and deep learning algorithms. But we are suggesting that this style of compute engine – it is more than a processor since it includes high bandwidth memory and fabric interconnect adapters on a single package – is what the future looks like. And that goes for Knights family processors and co-processors as well as the "Pascal" and "Volta" accelerators made by Nvidia, the SPARC64 XIfx and ARM chips that will be used in the Post-K system in Japan made by Fujitsu, the Matrix2000 DSP accelerator being created by China for one of its pre-exascale systems, or the CPU-GPU hybrids based on "Zen" Opterons that AMD is cooking up for supercomputing systems in the United States and, with licensing partners, in China. During the recent ISC16 supercomputing conference in Frankfurt, Germany, Intel gathered up the executives in charge of some of the largest supercomputing facilities on the planet who are – not coincidentally, but absolutely intentionally – also early adopters of the Knights Landing Xeon Phi and, in some cases, the Omni-Path interconnect that is a kicker to Intel's True Scale InfiniBand networking.
Intel's Knights Landing Is Finally Here - Artificial Intelligence Online
Graphics-chip company NVIDIA (NASDAQ:NVDA) has dominated the market for accelerators in recent years, with its Tesla GPUs being used for both high-performance computing and machine learning workloads. Tesla has become a big business for NVIDIA -- during the past 12 months, the company's data-center segment generated nearly $400 million of revenue. Intel (NASDAQ:INTC) has been eyeing this market for some time, but its Xeon Phi line of accelerator cards has so far failed to make much of an impact. Knights Landing, the latest Xeon Phi product from Intel, could change that. I first talked about Knights Landing two years ago, and following a major delay, the product is finally shipping in volume to customers.
Intel's megachips will take on Nvidia's GPUs and Google's TPUs
Intel's chip arsenal appears to have some glaring weaknesses. One of them is the lack of a high-end graphics processor, which is important for gaming, virtual reality and machine learning. However, the company does have powerful alternatives: two monster chips that will be ammunition to take on GPUs and rival chips in the areas of machine learning and supercomputing, which are important to the company. In 2018, Intel will likely release a faster and more power-efficient Xeon Phi, a supercomputing chip that is already used in some of the world's fastest computers. Intel is also looking beyond CPUs to FPGAs (field programmable gate arrays), which can be faster at key tasks.
Intel Launches 'Knights Landing' Phi Family for HPC, Machine Learning
From ISC 2016 in Frankfurt, Germany, this week, Intel Corp. launched the second-generation Xeon Phi product family, formerly code-named Knights Landing, aimed at HPC and machine learning workloads. The company had been shipping "Knights Landing" silicon to early customers for the last six months and was waiting to ramp up production before making the product generally available. The window also gave OEMs time to complete their readiness, said Intel's Charlie Wuischpard, vice president of the Data Center Group and general manager of High Performance Computing Platform Group, in a media pre-briefing. Those OEMs include the usual names: Cray, HPE, Lenovo, Dell and others. The most distinguishing feature of the chip is that it's a bootable host CPU -- unlike its predecessor "Knights Corner," which is a coprocessor that connects over PCIe.
Intel's data center chief talks about machine learning without GPUs
If you want to get under Diane Bryant's skin these days, just ask her about GPUs. The head of Intel's data center group was at Computex in Taipei this week, in part to explain how the company's latest Xeon Phi processor is a good fit for machine learning. Machine learning is the process by which companies like Google and Facebook train software to get better at performing AI tasks including computer vision and understanding natural language. It's key to improving all kinds of online services: Google said recently that it's rethinking everything it does around machine learning. "It's a big opportunity, and there will be a hockey stick where every business will be using machine learning," she said in an interview.
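The training process described above – software getting better at a task by example – reduces, at its core, to iteratively nudging model parameters to shrink an error measure. A minimal, framework-free sketch in plain Python (a toy illustration only, not any Intel or Google library; the dataset, learning rate, and model are invented for the example):

```python
# Toy "training": fit the parameter w in the model y_hat = w * x
# to data generated by y = 2x, using plain gradient descent.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs with true slope 2

def loss(w):
    # Mean squared error of y_hat = w * x over the toy data.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0    # untrained starting parameter
lr = 0.05  # learning rate

for _ in range(200):
    # Analytic gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step the parameter against the gradient

print(round(w, 3))  # converges to 2.0, the true slope
```

Deep learning scales this same loop up to millions of parameters and enormous datasets, which is why the matrix throughput of chips like Xeon Phi and Tesla GPUs matters so much.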